
    Reinforcement learning in dendritic structures

    The discovery of binary dendritic events such as local NMDA spikes in dendritic subbranches led to the suggestion that dendritic trees could be computationally equivalent to a 2-layer network of point neurons, with a single output unit represented by the soma and input units represented by the dendritic branches. Although this interpretation endows a neuron with high computational power, it is functionally unclear why nature would have preferred the dendritic solution, with a single but complex neuron, over the network solution, with many but simple units. We show that the dendritic solution has a distinct advantage over the network solution when considering different learning tasks. Its key property is that the dendritic branches receive immediate feedback from the somatic output spike, whereas in the corresponding network architecture the feedback would require additional backpropagating connections to the input units. Assuming a reinforcement learning scenario, we formally derive a learning rule for the synaptic contacts on the individual dendritic trees that depends on the presynaptic activity, the local NMDA spikes, the somatic action potential, and a delayed reinforcement signal. We test the model in two scenarios: the learning of binary classifications and of precise spike timings. We show that the immediate feedback provided by the backpropagating action potential supplies the individual dendritic branches with enough information to efficiently adapt their synapses and to speed up the learning process.
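A rule of this shape can be sketched as a generic three-factor update: the weight change is the product of a local eligibility term (presynaptic activity coinciding with the branch's NMDA spike and the somatic spike) and a delayed reward signal. This is a minimal illustration only; all variable names and values are assumptions, not the paper's actual derivation.

```python
import numpy as np

eta = 0.05                            # learning rate (illustrative value)
w0 = np.array([0.5, 0.5, 0.5, 0.5])   # initial weights of synapses on one branch

# One trial: hypothetical binary signals (illustrative, not from the paper).
pre = np.array([1, 0, 1, 1])          # presynaptic activity per synapse
nmda_spike = 1                        # local NMDA spike on this branch (0 or 1)
soma_spike = 1                        # backpropagating somatic action potential
reward = 1.0                          # delayed reinforcement signal

# Eligibility: coincidence of presynaptic activity with the local NMDA
# spike and the immediate somatic feedback.
eligibility = pre * nmda_spike * soma_spike

# Weight update gated by the delayed reward.
w = w0 + eta * reward * eligibility
```

Only synapses that were presynaptically active during a branch NMDA spike and a somatic spike are updated; the reward merely scales (and signs) that change.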

    Dendritic Morphology Predicts Pattern Recognition Performance in Multi-compartmental Model Neurons with and without Active Conductances

    This is an Open Access article published under the Creative Commons Attribution license CC BY 4.0, which allows users to read, copy, distribute and make derivative works, as long as the author of the original work is cited. In this paper we examine how a neuron’s dendritic morphology can affect its pattern recognition performance. We use two different algorithms to systematically explore the space of dendritic morphologies: an algorithm that generates all possible dendritic trees with 22 terminal points, and one that creates representative samples of trees with 128 terminal points. Based on these trees, we construct multi-compartmental models. To assess the performance of the resulting neuronal models, we quantify their ability to discriminate learnt and novel input patterns. We find that the dendritic morphology does have a considerable effect on pattern recognition performance and that the neuronal performance is inversely correlated with the mean depth of the dendritic tree. The results also reveal that the asymmetry index of the dendritic tree does not correlate with the performance for the full range of tree morphologies. The performance of neurons with dendritic tapering is best predicted by the mean and variance of the electrotonic distance of their synapses to the soma. All relationships found for passive neuron models also hold, even in more accentuated form, for neurons with active membranes.
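The mean depth referred to above can be computed directly from a tree's topology: the average number of branch points between the soma and each terminal point. A minimal sketch, using a toy tree and hypothetical node names (not the paper's generation algorithms):

```python
# Toy dendritic tree: node -> list of children; nodes without children
# are terminal points. Names are illustrative.
tree = {
    "soma": ["a", "b"],
    "a": ["t1", "t2"],
    "b": ["c", "t3"],
    "c": ["t4", "t5"],
}

def terminal_depths(tree, node="soma", depth=0):
    """Depths (branch steps from the soma) of all terminal points."""
    children = tree.get(node, [])
    if not children:
        return [depth]
    depths = []
    for child in children:
        depths += terminal_depths(tree, child, depth + 1)
    return depths

depths = terminal_depths(tree)
mean_depth = sum(depths) / len(depths)
```

For the toy tree above the five terminal points sit at depths 2, 2, 2, 3 and 3, giving a mean depth of 2.4; a more symmetric tree with the same number of terminals would have a smaller mean depth.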

    Balancing family with a successful career in neuroscience.

    After years of hard work as a student and postdoc, stressful negotiations and restless nights of agony regarding your academic future, you managed to secure a Principal Investigator (PI) position and establish your own laboratory. And just when you thought you could relax a bit and enjoy some time with your family, or start a family, you find yourself facing massive levels of responsibility added to your research, that demand most of your time and energy. The challenge to balance a successful career with a happy family life is not a trivial one. The FENS‐Kavli Network of Excellence is supported by FENS, the Kavli Foundation, Alzheimer's Research UK, the European Molecular Biology Organization (EMBO) and Roche. P.P. acknowledges funding from the European Research Council (StG 311435 dEMORY). J.G. is supported by the European Research Council (StG 678832 REMOTE MEMORY TRACES), an MQ fellow award, the Swiss National Science Foundation, the National Competence Center for Research in Switzerland SYNAPSY, the SYNAPSIS Foundation, the Béatrice Ederer‐Weber Stiftung, the Alzheimer's Association, as well as an Independent Investigator Award from the Brain and Behavior Research Foundation. I.H.‐O. is supported by the ERC Consolidator Grant CoG 681577 PSYCHOCELL. D.B. is supported by the Wellcome Trust. G.L.‐B. is supported by the European Research Council, ERC‐2014‐CoG 647012, and the Spanish MINECO grant BFU2012‐34298.

    Hippocampal GABAergic interneurons and memory

    One of the most captivating questions in neuroscience revolves around the brain's ability to efficiently and durably capture and store information. It must process continuous input from sensory organs while also encoding memories that can persist throughout a lifetime. What are the cellular-, subcellular-, and network-level mechanisms that underlie this remarkable capacity for long-term information storage? Furthermore, what contributions do distinct types of GABAergic interneurons make to this process? As the hippocampus plays a pivotal role in memory, our review focuses on three aspects: (1) delineation of hippocampal interneuron types and their connectivity, (2) interneuron plasticity, and (3) activity patterns of interneurons during memory-related rhythms, including the role of long-range interneurons and disinhibition. We explore how these three elements, together showcasing the remarkable diversity of inhibitory circuits, shape the processing of memories in the hippocampus.

    Recall Performance Improvement in a Bio-Inspired Model of the Mammalian Hippocampus

    The mammalian hippocampus is involved in the short-term formation of declarative memories. We employed a bio-inspired neural model of the hippocampal CA1 region consisting of a zoo of excitatory and inhibitory cells. The cells’ firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. To systematically evaluate the model’s recall performance against the number of stored patterns, their overlaps and the number of ‘active cells per pattern’, its cells were driven by a non-specific excitatory input to their dendrites. This excitatory input to the model’s excitatory cells provided context and timing information for the retrieval of previously stored memory patterns. Inhibition to the excitatory cells’ dendrites acted as a non-specific global threshold machine that removed spurious activity during recall. Of the three models tested, ‘model 1’ showed excellent recall quality across all conditions, while ‘model 2’ performed worst. The number of ‘active cells per pattern’ had a massive effect on network recall quality regardless of how many patterns were stored. As the number of ‘active cells per pattern’ decreased, the network’s memory capacity increased, interference effects between stored patterns decreased, and recall quality improved. A key finding was that increasing the firing rate of an inhibitory cell inhibiting a network of excitatory cells is more successful at removing spurious activity at the network level, and thus at improving recall quality, than increasing the synaptic strength of the same inhibitory cell onto the same network while keeping its firing rate fixed.
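Recall quality in associative-memory models of this kind is commonly scored as the correlation between the stored binary pattern and the set of cells active at recall, so that spurious activity lowers the score. A minimal sketch of that idea (the exact metric used in the paper may differ, and the patterns below are made up):

```python
import numpy as np

def recall_quality(stored, recalled):
    """Pearson correlation between a stored binary pattern and the
    pattern of cells active at recall. 1.0 means perfect recall;
    spurious or missing activity pulls the score below 1."""
    stored = np.asarray(stored, dtype=float)
    recalled = np.asarray(recalled, dtype=float)
    return float(np.corrcoef(stored, recalled)[0, 1])

stored   = [1, 0, 1, 0, 1, 0, 0, 0]   # cells belonging to the pattern
perfect  = [1, 0, 1, 0, 1, 0, 0, 0]   # exact retrieval
spurious = [1, 0, 1, 0, 1, 1, 0, 0]   # one spurious cell active at recall

q_perfect = recall_quality(stored, perfect)
q_spurious = recall_quality(stored, spurious)
```

Here stronger dendritic inhibition would suppress the spurious cell and move the recalled pattern back toward a quality of 1.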

    Active dendrites enhance neuronal dynamic range

    Since the first experimental evidence of active conductances in dendrites, most neurons have been shown to exhibit dendritic excitability through the expression of a variety of voltage-gated ion channels. However, despite the experimental and theoretical efforts undertaken in the last decades, the role of this excitability in dendritic computation has remained elusive. Here we show that, owing to very general properties of excitable media, the average output of a model of active dendritic trees is a highly non-linear function of the afferent rate, attaining extremely large dynamic ranges (above 50 dB). Moreover, the model yields double-sigmoid response functions, as experimentally observed in retinal ganglion cells. We claim that enhancement of dynamic range is the primary functional role of active dendritic conductances. We predict that neurons with larger dendritic trees should have larger dynamic ranges and that blocking active conductances should decrease the dynamic range.
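The dynamic range quoted in dB is conventionally the decibel span of input rates that map onto the central 10%-90% of the response range. A minimal sketch of that computation on a toy saturating response curve (the curve is illustrative, not the paper's dendritic-tree model):

```python
import numpy as np

def dynamic_range_db(rates, responses):
    """Dynamic range: 10*log10(r_90 / r_10), where r_10 and r_90 are the
    input rates producing 10% and 90% of the response range.
    `responses` must be monotonically increasing in `rates`."""
    responses = np.asarray(responses, dtype=float)
    span = responses.max() - responses.min()
    lo = responses.min() + 0.1 * span
    hi = responses.min() + 0.9 * span
    # Invert the response curve by interpolation: response -> rate.
    r10 = np.interp(lo, responses, rates)
    r90 = np.interp(hi, responses, rates)
    return 10.0 * np.log10(r90 / r10)

# Toy saturating response curve over six decades of afferent rate.
rates = np.logspace(-2, 4, 200)
responses = rates / (rates + 10.0)     # simple Hill-like saturation

dr = dynamic_range_db(rates, responses)
```

For this curve the result is close to the analytic value 10*log10(81) ≈ 19 dB; the point of the paper is that active dendritic trees push this figure above 50 dB.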